
    Disappearance of Spurious States in Analog Associative Memories

    We show that symmetric n-mixture states, when they exist, are almost never stable in autoassociative networks with threshold-linear units. Only with a binary coding scheme could we find a limited region of the parameter space in which either 2-mixtures or 3-mixtures are stable attractors of the dynamics.
    Comment: 5 pages, 3 figures, accepted for publication in Phys Rev
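    A minimal numerical sketch of this setting (not the paper's analytic calculation): store sparse binary patterns in a threshold-linear network with Hebbian covariance weights, start the dynamics at a symmetric 3-mixture, and check whether the pattern overlaps stay symmetric or collapse onto a single retrieval state. The network size, coding level, gain, and threshold below are assumed illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, a = 1000, 10, 0.2          # units, patterns, coding level (illustrative, not from the paper)
g, theta = 0.5, 0.0              # gain and threshold of the transfer function (assumed)

# Sparse binary (0/1) patterns, i.e. the binary coding scheme mentioned above
xi = (rng.random((P, N)) < a).astype(float)

# Hebbian covariance weights with self-connections removed
J = ((xi - a).T @ (xi - a)) / (a * (1 - a) * N)
np.fill_diagonal(J, 0.0)

def threshold_linear(h):
    return g * np.maximum(h - theta, 0.0)

# Start from a symmetric 3-mixture of the first three patterns
v = threshold_linear(xi[:3].mean(axis=0))
for _ in range(200):
    v = threshold_linear(J @ v)

# Overlaps with the stored patterns: a stable 3-mixture keeps three equal
# overlaps, an unstable one drifts toward a single (pure) retrieval state
m = (xi - a) @ v / (a * (1 - a) * N)
print(np.round(m[:5], 3))
```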

    Localized activity profiles and storage capacity of rate-based autoassociative networks

    We study analytically the effect of metrically structured connectivity on the behavior of autoassociative networks. We focus on three simple rate-based model neurons: threshold-linear, binary, and smoothly saturating units. For connectivity that is sufficiently short range, the threshold-linear network shows localized retrieval states. The saturating and binary models also exhibit spatially modulated retrieval states if the highest activity level that they can achieve is above the maximum activity of the units in the stored patterns. In the zero quenched-noise limit, we derive an analytical formula for the critical value of the connectivity width below which one observes spatially non-uniform retrieval states. Localization reduces storage capacity, but only by a factor of 2-3. The approach that we present here is generic in the sense that there are no specific assumptions on the single-unit input-output function or on the exact connectivity structure.
    Comment: 4 pages, 4 figures
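    As a rough companion to the analysis, the sketch below simulates a threshold-linear network on a ring whose Hebbian weights are modulated by a Gaussian metric kernel of width sigma, and reports how many units remain active after a pattern is cued; with a short-range kernel the surviving activity forms a localized bump. All sizes and parameters, and the crude global-inhibition step, are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, a = 1000, 5, 0.1           # illustrative sizes, not taken from the paper
sigma = 0.05 * N                 # connectivity width on a ring of N units (assumed)

# Distance-dependent (metric) connectivity on a ring
pos = np.arange(N)
d = np.abs(pos[:, None] - pos[None, :])
d = np.minimum(d, N - d)
K = np.exp(-0.5 * (d / sigma) ** 2)
np.fill_diagonal(K, 0.0)

# Sparse binary patterns and Hebbian covariance weights masked by the metric kernel
xi = (rng.random((P, N)) < a).astype(float)
J = K * ((xi - a).T @ (xi - a)) / (a * (1 - a) * N)

def threshold_linear(h, g=0.7, theta=0.0):
    return g * np.maximum(h - theta, 0.0)

# Cue the first pattern and iterate the rate dynamics
v = threshold_linear(xi[0])
for _ in range(100):
    v = threshold_linear(J @ v)
    v *= (a * N) / (v.sum() + 1e-12)   # crude global inhibition keeping total activity fixed

# With short-range connectivity the retrieved activity profile is spatially modulated:
# only units inside a contiguous bump stay strongly active
print("units above half-maximum activity:", int((v > 0.5 * v.max()).sum()))
```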

    Mean Field Theory For Non-Equilibrium Network Reconstruction

    There has been recent progress on the problem of inferring the structure of interactions in complex networks when they are in stationary states satisfying detailed balance, but little has been done for non-equilibrium systems. Here we introduce an approach to this problem, considering, as an example, the question of recovering the interactions in an asymmetrically coupled, synchronously updated Sherrington-Kirkpatrick model. We derive an exact iterative inversion algorithm and develop efficient approximations based on dynamical mean-field and Thouless-Anderson-Palmer equations that express the interactions in terms of equal-time and one-time-step-delayed correlation functions.
    Comment: new version, accepted in PRL. For the Supp. Mat. (ref. 11), please contact the author
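    As a rough sketch of the kind of estimator described above, one dynamical naive-mean-field variant expresses the couplings as J ~ A^{-1} D C^{-1}, where C is the equal-time correlation matrix, D the one-step-delayed correlation matrix, and A = diag(1 - m_i^2). Whether this is exactly the paper's formula is an assumption, and the function name below is made up for illustration.

```python
import numpy as np

def nmf_reconstruct(S):
    """Dynamical mean-field estimate of asymmetric couplings from a spin history.

    S: array of shape (T, N) with +/-1 entries, one synchronously updated
    configuration per row.  The estimator follows the general recipe above
    (couplings from equal-time and one-step-delayed correlations); the exact
    formula is an assumed naive-mean-field variant, not necessarily the
    paper's final algorithm.
    """
    m = S.mean(axis=0)
    dS = S - m
    C = dS[:-1].T @ dS[:-1] / (len(S) - 1)   # equal-time correlations
    D = dS[1:].T @ dS[:-1] / (len(S) - 1)    # one-step-delayed correlations
    A = np.diag(1.0 - m ** 2)                # local susceptibility factor
    return np.linalg.solve(A, D) @ np.linalg.inv(C)

# Usage idea: generate S from a kinetic Sherrington-Kirkpatrick model with known
# couplings J_true, then compare nmf_reconstruct(S) against J_true element-wise.
```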

    Dynamics and Performance of Susceptibility Propagation on Synthetic Data

    We study the performance and convergence properties of the Susceptibility Propagation (SusP) algorithm for solving the inverse Ising problem. We first study how the temperature parameter (T) of the Sherrington-Kirkpatrick model generating the data influences the performance and convergence of the algorithm. We find that in the high-temperature regime (T>4) the algorithm performs well, and its quality is limited only by the quality of the supplied data. In the low-temperature regime (T<4), we find that the algorithm typically does not converge, yielding diverging values for the couplings. However, we show that by stopping the algorithm at the right time, before the divergence becomes serious, good reconstruction can be achieved down to T~2. We then show that dense connectivity, loopiness of the connectivity, and high absolute magnetization all degrade the performance of the algorithm. When absolute magnetization is high, we show that other methods can work better than SusP. Finally, we show that for neural data with high absolute magnetization, SusP performs less well than TAP inversion.
    Comment: 9 pages, 7 figures
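    For context on the comparison mentioned above, here is a hedged sketch of the standard equilibrium TAP inversion from samples; whether it matches the paper's exact implementation is an assumption, and the function name is invented for illustration.

```python
import numpy as np

def tap_inversion(S):
    """Standard TAP inversion for the equilibrium inverse Ising problem.

    S: (T, N) array of +/-1 samples.  Solves, element-wise for i != j,
    2*m_i*m_j*J_ij^2 + J_ij + (C^-1)_ij = 0, keeping the root that reduces
    to the naive mean-field answer J_ij = -(C^-1)_ij as the magnetizations
    vanish.
    """
    m = S.mean(axis=0)
    Cinv = np.linalg.inv(np.cov(S, rowvar=False))
    mm = np.outer(m, m)
    with np.errstate(invalid="ignore", divide="ignore"):
        J = (-1.0 + np.sqrt(1.0 - 8.0 * mm * Cinv)) / (4.0 * mm)
    J = np.where(np.isfinite(J), J, -Cinv)   # fall back to naive mean field where ill-defined
    np.fill_diagonal(J, 0.0)
    return J
```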

    Ising Models for Inferring Network Structure From Spike Data

    Now that spike trains from many neurons can be recorded simultaneously, there is a need for methods to decode these data to learn about the networks that these neurons are part of. One approach to this problem is to adjust the parameters of a simple model network to make its spike trains resemble the data as much as possible. The connections in the model network can then give us an idea of how the real neurons that generated the data are connected and how they influence each other. In this chapter we describe how to do this for the simplest kind of model: an Ising network. We derive algorithms for finding the best model connection strengths for fitting a given data set, as well as faster approximate algorithms based on mean-field theory. We test the performance of these algorithms on data from model networks and experiments.
    Comment: To appear in "Principles of Neural Coding", edited by Stefano Panzeri and Rodrigo Quian Quiroga
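    A minimal sketch of the exact fitting procedure described above (Boltzmann learning: gradient ascent that matches the model's means and pairwise correlations to the data), using brute-force enumeration of states so it only runs for small networks. The function name, learning rate, and step count are illustrative assumptions, and the faster mean-field approximations mentioned in the chapter are not included here.

```python
import numpy as np
from itertools import product

def fit_ising(S, n_steps=500, lr=0.1):
    """Exact Boltzmann learning for a small equilibrium Ising model.

    S: (T, N) array of +/-1 spike patterns, one time bin per row.  Exact
    enumeration of the 2**N states limits this sketch to N of about 20 or less.
    """
    T, N = S.shape
    mean_data = S.mean(axis=0)
    corr_data = S.T @ S / T

    h = np.zeros(N)
    J = np.zeros((N, N))
    states = np.array(list(product([-1.0, 1.0], repeat=N)))

    for _ in range(n_steps):
        # Boltzmann distribution of the current model
        E = -states @ h - 0.5 * np.einsum('ti,ij,tj->t', states, J, states)
        p = np.exp(-E - np.logaddexp.reduce(-E))
        mean_model = p @ states
        corr_model = states.T @ (states * p[:, None])
        # Gradient ascent on the log-likelihood
        h += lr * (mean_data - mean_model)
        J += lr * (corr_data - corr_model)
        np.fill_diagonal(J, 0.0)
    return h, J
```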